Dimensionality Reduction — Notes 2
1  Optimality theorems for JL

Yesterday we saw that for MJL we can achieve target dimension m = O(ε^{-2} log N), and that for DJL we can achieve m = O(ε^{-2} log(1/δ)). (Recall: MJL asks for a single map preserving all pairwise distances among N fixed points up to 1 ± ε; DJL asks for a distribution over maps that preserves the norm of any fixed vector up to 1 ± ε with probability at least 1 − δ.) The following theorems tell us that not much improvement is possible for MJL, and that for DJL this bound is optimal.

Theorem 1 ([Alo03]). For any N > 1 and ε < 1/2, there exist N + 1 points in ℝ^N such that achieving the MJL guarantee with distortion 1 + ε requires m ≳ min{n, ε^{-2} (log N)/log(1/ε)}.

The log(1/ε) loss in the lower bound can be removed if the map is required to be linear.

Theorem 2 ([LN14]). For any N > 1 and ε < 1/2, there exist N^{O(1)} points in ℝ^N such that achieving the MJL guarantee with distortion 1 + ε using a linear map requires m ≳ min{n, ε^{-2} log N}. Furthermore, for any ε, δ < 1/2, any DJL distribution must have m ≳ min{n, ε^{-2} log(1/δ)}.
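To see the upper-bound side of these statements concretely, here is a minimal numerical sketch (not from the notes) of the MJL guarantee: project N points through a scaled i.i.d. Gaussian matrix with m = ⌈8 ε^{-2} ln N⌉ rows and measure the worst pairwise distortion. The constant 8 and all parameter values are assumptions chosen for the demo, not the notes' construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Demo parameters (assumed, not from the notes): ambient dimension n,
# number of points N, target distortion eps.
n, N, eps = 1000, 50, 0.25
m = int(np.ceil(8 * np.log(N) / eps**2))  # m = O(eps^-2 log N); the constant 8 is a loose demo choice

X = rng.standard_normal((N, n))               # N arbitrary points in R^n
A = rng.standard_normal((m, n)) / np.sqrt(m)  # JL map: i.i.d. Gaussian entries, scaled so E||Ax||^2 = ||x||^2
Y = X @ A.T                                   # embedded points in R^m

# MJL check: for every pair, ||Ax - Ax'|| should lie in (1 +/- eps) * ||x - x'||.
worst = 0.0
for i in range(N):
    for j in range(i + 1, N):
        ratio = np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
        worst = max(worst, abs(ratio - 1.0))
print(f"m = {m}, worst pairwise distortion = {worst:.3f} (target eps = {eps})")
```

With these settings m ≈ 501 ≪ n = 1000, and the reported worst-case distortion should fall below eps with high probability; the theorems above say that, up to the constant and (for non-linear maps) a log(1/ε) factor, no embedding can do better than this m.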